power iteration
In mathematics, the power iteration is an eigenvalue algorithm: given a matrix ''A'', the algorithm will produce a number λ (the eigenvalue) and a nonzero vector ''v'' (the eigenvector), such that ''Av'' = λ''v''.
The algorithm is also known as the Von Mises iteration.〔Richard von Mises and H. Pollaczek-Geiringer,
''Praktische Verfahren der Gleichungsauflösung'', ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik 9, 152-164 (1929).〕
The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when ''A'' is a very large sparse matrix. However, it finds only one eigenvalue (the one of greatest absolute value) and it may converge only slowly: the convergence is geometric, with ratio |λ_2/λ_1|, where λ_1 and λ_2 are the eigenvalues of largest and second-largest magnitude, so it is slow when these two magnitudes are close.
==The method==
The power iteration algorithm starts with a vector ''b''0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation
: b_{k+1} = \frac{Ab_k}{\|Ab_k\|}.
So, at every iteration, the vector ''b''''k'' is multiplied by the matrix ''A'' and normalized.
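As a concrete illustration (an example of ours, not from the original article), take
: A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \qquad b_0 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
Then
: b_1 = \frac{Ab_0}{\|Ab_0\|} = \frac{1}{\sqrt{5}} \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \qquad b_2 = \frac{Ab_1}{\|Ab_1\|} = \frac{1}{\sqrt{17}} \begin{pmatrix} 4 \\ 1 \end{pmatrix},
and the iterates approach the dominant eigenvector (1, 0)^\top, whose eigenvalue is 2.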
If we assume ''A'' has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector b_0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence \left( b_k \right) converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence \left( b_k \right) does not necessarily converge. In this sequence,
: b_k = e^{i \phi_k} v_1 + r_k,
where v_1 is an eigenvector associated with the dominant eigenvalue, and \| r_k \| \rightarrow 0. The presence of the term e^{i \phi_k} implies that \left( b_k \right) does not converge unless e^{i \phi_k} = 1. Under the two assumptions listed above, the sequence \left( \mu_k \right) defined by
: \mu_k = \frac{b_k^{*} A b_k}{b_k^{*} b_k}
converges to the dominant eigenvalue.
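In the illustrative 2 × 2 example above, this gives
: \mu_1 = \frac{b_1^{*} A b_1}{b_1^{*} b_1} = \frac{9}{5} = 1.8, \qquad \mu_2 = \frac{b_2^{*} A b_2}{b_2^{*} b_2} = \frac{33}{17} \approx 1.94,
values already close to the dominant eigenvalue 2.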
This can be run as a simple iterative program: at every pass the current vector is multiplied by ''A'', and the product, divided by its norm, becomes the next vector.
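The following is a minimal C sketch of that loop (a reconstruction of ours, since the article's original code survives only as a fragment; the 3×3 matrix ''A'', the fixed iteration count, and the starting vector are illustrative assumptions):

 #include <math.h>
 #include <stdio.h>

 #define N 3            /* illustrative problem size */
 #define ITERATIONS 100 /* illustrative fixed iteration count */

 int main(void) {
     /* Example matrix; any square matrix with a dominant eigenvalue will do. */
     double A[N][N] = {{2, 1, 0},
                       {1, 3, 1},
                       {0, 1, 4}};
     double b[N] = {1, 1, 1};   /* starting vector b_0 */
     double tmp[N], norm = 0.0;

     for (int k = 0; k < ITERATIONS; k++) {
         /* tmp = A * b */
         for (int i = 0; i < N; i++) {
             tmp[i] = 0.0;
             for (int j = 0; j < N; j++)
                 tmp[i] += A[i][j] * b[j];
         }
         /* norm = ||A * b|| */
         norm = 0.0;
         for (int i = 0; i < N; i++)
             norm += tmp[i] * tmp[i];
         norm = sqrt(norm);
         /* b = A * b / ||A * b||, the next iterate */
         for (int i = 0; i < N; i++)
             b[i] = tmp[i] / norm;
     }

     /* norm now approximates |lambda_1|, and b an associated eigenvector. */
     printf("dominant eigenvalue (magnitude): %f\n", norm);
     printf("eigenvector: %f %f %f\n", b[0], b[1], b[2]);
     return 0;
 }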
The value of ''norm'' converges to the absolute value of the dominant eigenvalue, and the vector ''b'' to an associated eigenvector.
''Note:'' The code above assumes a real matrix ''A'' and vector ''b''. For complex data the matrix-by-vector product is formed in the same way, but the squared norm must conjugate one factor: tmp[i]*tmp[i] becomes conj(tmp[i])*tmp[i], so that the norm remains real.
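For concreteness, one way to write a single complex power-iteration step in C99 (the function name, the row-major storage, and the caller-supplied scratch buffer are our assumptions, not part of the original article):

 #include <complex.h>
 #include <math.h>

 /* One power-iteration step for a complex n-by-n matrix A stored row-major.
    Overwrites b with A*b normalized and returns ||A*b||. */
 double power_step_complex(int n, const double complex *A,
                           double complex *b, double complex *tmp) {
     double norm = 0.0;
     for (int i = 0; i < n; i++) {
         tmp[i] = 0.0;
         for (int j = 0; j < n; j++)
             tmp[i] += A[i * n + j] * b[j];     /* tmp = A * b, no conjugation */
         norm += creal(conj(tmp[i]) * tmp[i]);  /* accumulate |tmp[i]|^2 */
     }
     norm = sqrt(norm);
     for (int i = 0; i < n; i++)
         b[i] = tmp[i] / norm;                  /* normalize for the next pass */
     return norm;
 }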
This algorithm is the one used to calculate such things as the Google PageRank.
The method can also be used to calculate the spectral radius (the largest eigenvalue in absolute value) of a matrix by computing the Rayleigh quotient
: \frac{b_k^\top A b_k}{b_k^\top b_k} = \frac{(Ab_k)^\top b_k}{b_k^\top b_k}.
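A small real-valued helper for this quotient (assuming the same row-major storage as in the complex sketch above; the function name is ours):

 /* Rayleigh quotient b^T A b / b^T b for a real n-by-n matrix A stored row-major. */
 double rayleigh_quotient(int n, const double *A, const double *b) {
     double num = 0.0, den = 0.0;
     for (int i = 0; i < n; i++) {
         double Ab_i = 0.0;
         for (int j = 0; j < n; j++)
             Ab_i += A[i * n + j] * b[j];   /* (A b)_i */
         num += b[i] * Ab_i;                /* accumulate b^T A b */
         den += b[i] * b[i];                /* accumulate b^T b */
     }
     return num / den;
 }

Unlike ''norm'' in the loop above, which approaches |λ_1|, this quotient also recovers the sign of the dominant eigenvalue.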

Source: the free encyclopedia Wikipedia; read the full article "power iteration" on Wikipedia.